
    Light Field Blind Motion Deblurring

    We study the problem of deblurring light fields of general 3D scenes captured under 3D camera motion and present both theoretical and practical contributions. By analyzing the motion-blurred light field in the primal and Fourier domains, we develop intuition into the effects of camera motion on the light field, show the advantages of capturing a 4D light field instead of a conventional 2D image for motion deblurring, and derive simple methods of motion deblurring in certain cases. We then present an algorithm to blindly deblur light fields of general scenes without any estimation of scene geometry, and demonstrate that we can recover both the sharp light field and the 3D camera motion path of real and synthetically-blurred light fields.
    Comment: To be presented at CVPR 201
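    As a toy illustration of the forward model that blind deblurring inverts, motion blur can be written as the average of the sharp signal translated along the camera path. The sketch below (with a hypothetical helper `motion_blur`) shows only the 2D-image analogue of this; the paper operates on the full 4D light field:

    ```python
    import numpy as np

    def motion_blur(image, path):
        """Average the sharp image translated along an integer camera path.
        path: list of (dy, dx) pixel offsets sampled along the motion."""
        acc = np.zeros_like(image, dtype=float)
        for dy, dx in path:
            # np.roll translates the image; real blur uses sub-pixel warps
            acc += np.roll(image, (dy, dx), axis=(0, 1))
        return acc / len(path)
    ```

    A path with a single zero offset leaves the image unchanged, which is a useful sanity check for any such forward model.
    
    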

    Managing Exchange Rate Volatility: A Comparative Counterfactual Analysis of Singapore 1994 to 2003

    The objective of this paper is to see how well Singapore’s exchange rate regime has coped with exchange rate volatility before and after the Asian financial crisis, by comparing the performance of Singapore’s actual regime in minimising the volatility of the nominal effective exchange rate (NEER) and the bilateral rate against the US$ with that of several counterfactual regimes, and with the corresponding performance of eight other East Asian countries. In contrast to previous counterfactual exercises, such as Williamson (1998a) and Ohno (1999), which compute the weights for effective exchange rates on the basis of simple bloc aggregates, we apply a more disaggregated methodology using a larger number of trade partners. We also utilize ARCH/GARCH techniques to obtain estimates of heteroskedastic variances, to better capture the time-varying characteristics of volatility for the actual and simulated exchange rate regimes. Our findings confirm that Singapore’s managed floating exchange rate system has delivered relatively low currency volatility. Although all countries in the sample would gain in volatility reduction from adopting either a unilateral or a common basket peg, particularly post-crisis, these gains are relatively small for Singapore, largely because its actual volatility is already low. Finally, there are additional gains for non-dollar peggers from stabilizing intra-East Asian exchange rates against the dollar if they were to adopt a basket peg, especially post-crisis, but the gains for Singapore are again relatively modest.
    Keywords: East Asia, exchange rates, counterfactuals.
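    The time-varying volatility estimates mentioned above come from GARCH-type models, whose core is a simple conditional-variance recursion. A minimal sketch of a GARCH(1,1) variance path, with illustrative (not the paper's) parameter values:

    ```python
    import numpy as np

    def garch_variance(returns, omega=1e-6, alpha=0.1, beta=0.85):
        """Conditional variance path of a GARCH(1,1) model:
        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
        r = np.asarray(returns, dtype=float)
        sigma2 = np.empty_like(r)
        sigma2[0] = r.var()  # initialize at the unconditional sample variance
        for t in range(1, len(r)):
            sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
        return sigma2

    # Example: conditional volatility of simulated daily exchange-rate returns
    rng = np.random.default_rng(0)
    rets = 0.005 * rng.standard_normal(500)
    vol = np.sqrt(garch_variance(rets))
    ```

    In practice the parameters are fit by maximum likelihood (e.g. with a dedicated econometrics package) rather than fixed by hand; the recursion itself is what captures volatility clustering.
    
    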

    Learning to Synthesize a 4D RGBD Light Field from a Single Image

    We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point. Please see our supplementary video at https://youtu.be/yLCvWoQLnms
    Comment: International Conference on Computer Vision (ICCV) 201
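    The Lambertian rendering stage of such a pipeline can be pictured as warping one RGB image into a grid of sub-aperture views, shifting each pixel by its disparity times the view offset. A rough NumPy sketch (nearest-neighbor forward warp; the hypothetical helper name and grid size are for illustration only):

    ```python
    import numpy as np

    def render_lambertian_light_field(rgb, disparity, n_views=3):
        """Forward-warp one RGB image into an (n_views x n_views) grid of
        sub-aperture views. Each pixel moves by disparity * view offset;
        color is view-independent (the Lambertian assumption)."""
        h, w, _ = rgb.shape
        offsets = np.arange(n_views) - n_views // 2
        lf = np.zeros((n_views, n_views, h, w, 3))
        ys, xs = np.mgrid[0:h, 0:w]
        for vi, dv in enumerate(offsets):
            for ui, du in enumerate(offsets):
                # Nearest-neighbor target coordinates for each source pixel
                ty = np.clip(np.round(ys + dv * disparity).astype(int), 0, h - 1)
                tx = np.clip(np.round(xs + du * disparity).astype(int), 0, w - 1)
                lf[vi, ui, ty, tx] = rgb
        return lf

    lf = render_lambertian_light_field(np.ones((8, 8, 3)), np.full((8, 8), 0.5))
    ```

    The central view (zero offset) reproduces the input exactly; the second CNN in the paper is what fills in the occlusions this naive warp leaves behind.
    
    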

    Aperture Supervision for Monocular Depth Estimation

    We present a novel method to train machine learning algorithms to estimate scene depths from a single image, by using the information provided by a camera's aperture as supervision. Prior works use a depth sensor's outputs or images of the same scene from alternate viewpoints as supervision, while our method instead uses images from the same viewpoint taken with a varying camera aperture. To enable learning algorithms to use aperture effects as supervision, we introduce two differentiable aperture rendering functions that use the input image and predicted depths to simulate the depth-of-field effects caused by real camera apertures. We train a monocular depth estimation network end-to-end to predict the scene depths that best explain these finite aperture images as defocus-blurred renderings of the input all-in-focus image.
    Comment: To appear at CVPR 2018 (updated to camera ready version)
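    The physics behind aperture supervision is the thin-lens circle of confusion: a point at depth d, imaged by a lens focused at depth f, blurs to a disc whose radius grows with the aperture and with |1/d - 1/f|. A crude, non-differentiable NumPy sketch of such a defocus renderer (function names and the scale constant are illustrative, not the paper's):

    ```python
    import numpy as np

    def coc_radius(depth, focus_depth, aperture, scale=1.0):
        """Circle-of-confusion radius (pixels) under the thin-lens model:
        proportional to aperture * |1/depth - 1/focus_depth|."""
        return scale * aperture * np.abs(1.0 / depth - 1.0 / focus_depth)

    def render_defocus(image, depth, focus_depth, aperture, scale=20.0):
        """Scatter-style defocus: each input pixel splats its intensity
        uniformly over its circle of confusion (box approximation)."""
        h, w = image.shape
        radius = coc_radius(depth, focus_depth, aperture, scale)
        out = np.zeros_like(image, dtype=float)
        weight = np.zeros_like(image, dtype=float)
        for y in range(h):
            for x in range(w):
                r = int(np.ceil(radius[y, x]))
                y0, y1 = max(0, y - r), min(h, y + r + 1)
                x0, x1 = max(0, x - r), min(w, x + r + 1)
                n = (y1 - y0) * (x1 - x0)
                out[y0:y1, x0:x1] += image[y, x] / n
                weight[y0:y1, x0:x1] += 1.0 / n
        return out / np.maximum(weight, 1e-8)
    ```

    Pixels at the focus depth get zero radius and stay sharp. The paper's versions are differentiable in the predicted depths, which is what lets gradients flow back into the depth network.
    
    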

    Burst Denoising with Kernel Prediction Networks

    We present a technique for jointly denoising bursts of images taken from a handheld camera. In particular, we propose a convolutional neural network architecture for predicting spatially varying kernels that can both align and denoise frames, a synthetic data generation approach based on a realistic noise formation model, and an optimization guided by an annealed loss function to avoid undesirable local minima. Our model matches or outperforms the state-of-the-art across a wide range of noise levels on both real and synthetic data.
    Comment: To appear in CVPR 2018 (spotlight). Project page: http://people.eecs.berkeley.edu/~bmild/kpn
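    In a kernel prediction network, the CNN's output is not a denoised image but a KxK kernel per pixel and per frame; applying those kernels to the burst produces the result. A minimal sketch of the application step, assuming the kernels are already given (in KPN they come from the network):

    ```python
    import numpy as np

    def apply_predicted_kernels(burst, kernels):
        """Denoise a burst with per-pixel kernels.
        burst:   (N, H, W) noisy frames
        kernels: (N, H, W, K, K), one KxK kernel per frame and pixel
        Returns an (H, W) image: each output pixel is the kernel-weighted
        sum over its KxK neighborhood in every frame."""
        n, h, w = burst.shape
        k = kernels.shape[-1]
        pad = k // 2
        padded = np.pad(burst, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
        out = np.zeros((h, w))
        for f in range(n):
            for dy in range(k):
                for dx in range(k):
                    out += kernels[f, :, :, dy, dx] * padded[f, dy:dy + h, dx:dx + w]
        return out
    ```

    Because each kernel can place its weight off-center, the same operation both aligns (shifts) and averages (denoises) the frames, which is why a single predicted kernel stack handles both jobs.
    
    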